5,611 research outputs found

    Addendum to "Nonlinear quantum evolution with maximal entropy production"

    The author calls attention to previous work with related results, which had escaped scrutiny before the publication of the article "Nonlinear quantum evolution with maximal entropy production", Phys. Rev. A 63, 022105 (2001). Comment: RevTex-latex2e, 2 pgs., no figs.; brief report to appear in the May 2001 issue of Phys. Rev.

    Quantum thermodynamic Carnot and Otto-like cycles for a two-level system

    From the thermodynamic equilibrium properties of a two-level system with variable energy-level gap $\Delta$, and a careful distinction between the Gibbs relation $dE = T\,dS + (E/\Delta)\,d\Delta$ and the energy balance equation $dE = \delta Q^\leftarrow - \delta W^\rightarrow$, we infer some important aspects of the second law of thermodynamics and, contrary to a recent suggestion based on the analysis of an Otto-like thermodynamic cycle between two values of $\Delta$ of a spin-1/2 system, we show that a quantum thermodynamic Carnot cycle, with the celebrated optimal efficiency $1 - T_{\mathrm{low}}/T_{\mathrm{high}}$, is possible in principle with no need of an infinite number of infinitesimal processes, provided we cycle smoothly over at least three (in general four) values of $\Delta$, and we change $\Delta$ not only along the isoentropics but also along the isotherms, e.g., by use of the recently suggested maser-laser tandem technique. We derive general bounds on the ratio of net work to high-temperature heat for a Carnot cycle and for the 'inscribed' Otto-like cycle, and represent these cycles on useful thermodynamic diagrams. Comment: RevTex4, 4 pages, 1 figure
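
    To make the Gibbs-relation/energy-balance distinction concrete, the two differentials can be combined along a reversible path; the LaTeX below is our restatement in the abstract's notation, not an excerpt from the paper.

        % Identifying heat and work terms from
        %   dE = T dS + (E/\Delta) d\Delta                  (Gibbs relation)
        %   dE = \delta Q^\leftarrow - \delta W^\rightarrow (energy balance)
        \begin{equation}
          \delta Q^\leftarrow = T\,dS,
          \qquad
          \delta W^\rightarrow = -\frac{E}{\Delta}\,d\Delta .
        \end{equation}
        % For a cycle made of two isotherms joined by two isoentropics,
        % Q_high = T_high \Delta S and Q_low = T_low \Delta S, hence
        \begin{equation}
          \eta
            = \frac{W^\rightarrow_{\mathrm{net}}}{Q^\leftarrow_{\mathrm{high}}}
            = 1 - \frac{T_{\mathrm{low}}}{T_{\mathrm{high}}} .
        \end{equation}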

    A variational algorithm for the detection of line segments

    In this paper we propose an algorithm for the detection of edges in images that is based on topological asymptotic analysis. Motivated by the Mumford--Shah functional, we consider a variational functional that penalizes oscillations outside some approximate edge set, which we represent as the union of a finite number of thin strips, the width of which is an order of magnitude smaller than their length. In order to find a near-optimal placement of these strips, we compute an asymptotic expansion of the functional with respect to the strip size. This expansion is then employed for defining a (topological) gradient-descent-like minimization method. In contrast to a method recently proposed by some of the authors, which uses coverings with balls, the use of strips incorporates directional information into the method, which can be used for obtaining finer edges and can also reduce computation times.
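
    A greedy placement loop of the kind described might look like the following Python sketch; the indicator formula, the strip parametrization, and the masking step are placeholders of our own, not the authors' asymptotic expansion.

        import numpy as np

        def topological_indicator(image, cx, cy, theta, length=9):
            """Toy stand-in for the asymptotic expansion: estimate how much
            local oscillation a thin strip at (cx, cy) with orientation theta
            would absorb, by summing squared gradients along the strip axis.
            (The paper derives this quantity analytically.)"""
            gy, gx = np.gradient(image.astype(float))
            ts = np.linspace(-length / 2, length / 2, length)
            xs = np.clip((cx + ts * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
            ys = np.clip((cy + ts * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
            return float(np.sum(gx[ys, xs] ** 2 + gy[ys, xs] ** 2))

        def place_strips(image, n_strips=20, n_angles=8):
            """Greedy 'topological gradient descent': repeatedly insert the
            strip predicted to decrease the functional the most, then mask
            the covered pixels so later strips spread along the edge set."""
            img = image.astype(float).copy()
            strips = []
            for _ in range(n_strips):
                gy, gx = np.gradient(img)
                cy, cx = np.unravel_index(np.argmax(gx ** 2 + gy ** 2), img.shape)
                best = None
                for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
                    gain = topological_indicator(img, cx, cy, theta)
                    if best is None or gain > best[0]:
                        best = (gain, cx, cy, theta)
                _, cx, cy, theta = best
                strips.append((cx, cy, theta))
                # crude mask: flatten the strongest response before iterating
                img[max(cy - 1, 0):cy + 2, max(cx - 1, 0):cx + 2] = img.mean()
            return strips

        if __name__ == "__main__":
            # synthetic test image: a bright square on a dark background
            im = np.zeros((64, 64)); im[20:44, 20:44] = 1.0
            for s in place_strips(im, n_strips=5):
                print("strip at (x=%d, y=%d), angle=%.2f rad" % s)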

    A Fast General-Purpose Clustering Algorithm Based on FPGAs for High-Throughput Data Processing

    We present a fast general-purpose algorithm for high-throughput clustering of data "with a two-dimensional organization". The algorithm is designed to be implemented with FPGAs or custom electronics. The key feature is a processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in pipeline with the readout, without suffering from combinatorial delays due to looping multiple times through all the data. This feature makes the algorithm especially well suited for problems where the data density is high, e.g. for tracking devices working under high-luminosity conditions such as those of the LHC or Super-LHC. The algorithm is organized in two steps: the first step (core) clusters the data; the second step analyzes each cluster to extract the desired information. The current algorithm is developed as a clustering device for modern high-energy physics pixel detectors. However, it has a much broader field of application: its core does not specifically rely on the kind of data or detector it works with, while the second step can and should be tailored to a given application. Applications can thus be foreseen for other detectors and other scientific fields, ranging from HEP calorimeters to medical imaging. An additional advantage of this two-step approach is that the typical clustering-related calculations (second step) are separated from the combinatorial complications of clustering. This separation simplifies the design of the second step and enables it to perform sophisticated calculations, achieving offline-quality results in online applications. The algorithm is general purpose in the sense that only minimal assumptions are made on the kind of clustering to be performed. Comment: 11th Frontier Detectors For Frontier Physics conference (2009)
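
    As a rough software analogue of the two-step structure (hardware details aside), step one can be a single linear pass that merges adjacent hits on a 2D grid, and step two a per-cluster computation such as a centroid. The Python sketch below is our illustration of that split, not the FPGA implementation.

        from collections import defaultdict

        def cluster_hits(hits):
            """Step 1 (core): one pass over hits sorted by (row, col),
            merging each hit with already-seen neighbours (8-connectivity)
            via union-find. Near-linear in the number of hits, mirroring
            the 'process once, in readout order' idea."""
            parent = {}

            def find(h):
                while parent[h] != h:
                    parent[h] = parent[parent[h]]  # path halving
                    h = parent[h]
                return h

            seen = set()
            for hit in sorted(hits):
                parent[hit] = hit
                r, c = hit
                for dr, dc in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                    nb = (r + dr, c + dc)
                    if nb in seen:
                        parent[find(hit)] = find(nb)
                seen.add(hit)

            clusters = defaultdict(list)
            for hit in hits:
                clusters[find(hit)].append(hit)
            return list(clusters.values())

        def centroid(cluster):
            """Step 2 (application-specific): here, an unweighted centroid,
            as a pixel detector might compute; other applications would
            replace this with their own per-cluster calculation."""
            n = len(cluster)
            return (sum(r for r, _ in cluster) / n, sum(c for _, c in cluster) / n)

        if __name__ == "__main__":
            hits = [(0, 0), (0, 1), (1, 1), (5, 5), (5, 6)]
            for cl in cluster_hits(hits):
                print(sorted(cl), "->", centroid(cl))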

    Thermodynamic analysis of turbulent combustion in a spark ignition engine. Experimental evidence

    A method independent of physical modeling assumptions is presented for analyzing high-speed flame photography and cylinder pressure measurements from a transparent-piston spark ignition research engine. The method involves defining characteristic quantities of the phenomena of flame propagation and combustion, and estimating their values from the experimental information. Using only the pressure information, the mass fraction curves are examined, and an empirical burning law is presented which simulates such curves. Statistical data for the characteristic delay and burning angles are discussed, showing that cycle-to-cycle fractional variations are of the same order of magnitude for both angles. The enflamed and burnt mass fractions are compared, as are the rates of entrainment and burning.
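
    The paper's burning law itself is not reproduced in the abstract; a commonly used empirical form for mass-fraction-burned curves of this type is the Wiebe function, sketched below in Python with illustrative parameter values of our choosing.

        import math

        def wiebe_mass_fraction_burned(theta, theta_0, delta_theta, a=5.0, m=2.0):
            """Wiebe function: a standard empirical burning law giving the
            mass fraction burned x_b versus crank angle theta (degrees).
            theta_0     : start of combustion (end of the 'delay angle')
            delta_theta : total burning angle (combustion duration)
            a, m        : efficiency and shape parameters (typical values;
                          the paper fits its own law to pressure data)."""
            if theta < theta_0:
                return 0.0
            z = (theta - theta_0) / delta_theta
            return 1.0 - math.exp(-a * z ** (m + 1.0))

        if __name__ == "__main__":
            # mass fraction burned at 10-degree steps after a spark at -20 deg
            for theta in range(-20, 41, 10):
                xb = wiebe_mass_fraction_burned(theta, theta_0=-15.0, delta_theta=50.0)
                print(f"theta = {theta:4d} deg  x_b = {xb:.3f}")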

    Electron diffusion and signal noise contributions to electron cluster detection efficiency

    The Cluster Counting (CC) technique, proposed for dE/dx measurements with the SuperB drift chamber, could significantly improve particle identification by avoiding the fluctuations involved in charge measurements. As the technique is quite sensitive to the detector working conditions and to the front-end chain response, in this note we investigate the effects of electron diffusion, preamplifier frequency response, and Signal-to-Noise Ratio (SNR) on cluster detection efficiency using different algorithms. The evaluation is based on Garfield datasets, generated for a single-cell geometry, at different impact points for π/ÎŒ/e particles with momenta of 120, 140, 160, 180 and 210 MeV. The current waveforms generated by Garfield have been shaped according to the preamplifier response, and different amounts of white Gaussian noise have been added to the waveforms to simulate different SNRs. Finally, an estimate of π/ÎŒ/e separation is shown.
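
    The processing chain described (shaping by the preamplifier response, noise injection at a target SNR, then cluster counting) can be illustrated in a few lines of Python; the single-pole response, the noise scaling, and the threshold logic below are our simplifications, not the note's algorithms.

        import numpy as np

        rng = np.random.default_rng(0)

        def shape(current, tau=5.0):
            """Convolve the raw current waveform with a single-pole (RC)
            preamplifier impulse response with time constant tau (samples)."""
            t = np.arange(0, 10 * tau)
            h = np.exp(-t / tau) / tau
            return np.convolve(current, h)[: len(current)]

        def add_noise(signal, snr_db):
            """Add white Gaussian noise scaled to the requested SNR in dB,
            defined here relative to the signal's mean square power."""
            p_signal = np.mean(signal ** 2)
            p_noise = p_signal / (10.0 ** (snr_db / 10.0))
            return signal + rng.normal(0.0, np.sqrt(p_noise), len(signal))

        def count_clusters(waveform, threshold):
            """Count rising-edge threshold crossings as detected clusters
            (a deliberately simple stand-in for the detection algorithms
            compared in the note)."""
            above = waveform > threshold
            return int(np.sum(above[1:] & ~above[:-1]))

        if __name__ == "__main__":
            # toy 'current': three ionization clusters as narrow spikes
            current = np.zeros(400)
            current[[50, 180, 300]] = 1.0
            shaped = shape(current)
            for snr_db in (20, 10, 5):
                noisy = add_noise(shaped, snr_db)
                print(f"SNR = {snr_db:2d} dB -> clusters found:",
                      count_clusters(noisy, threshold=0.5 * shaped.max()))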

    Use of degree of disequilibrium analysis to select kinetic constraints for the rate-controlled constrained-equilibrium (RCCE) method

    The Rate-Controlled Constrained-Equilibrium (RCCE) method provides a general framework that enables, with the same ease, reduced-order kinetic modeling at three different levels of approximation: shifting equilibrium, frozen equilibrium, and non-equilibrium chemical kinetics. The method in general requires a significantly smaller number of differential equations than the dimension of the underlying Detailed Kinetic Model (DKM) for acceptable accuracy. To provide accurate approximations, however, the method requires accurate identification of the bottleneck kinetic mechanisms responsible for slowing down the relaxation of the state of the system towards local chemical equilibrium. In other words, the method requires that such bottleneck mechanisms be characterized by means of a set of representative constraints. So far, a drawback of the RCCE method has been the absence of a systematic algorithm that would allow a fully automatable identification of the best constraints for a given range of thermodynamic conditions and a required level of approximation. In this paper, we provide the first of two steps towards such an algorithm, based on the analysis of the degrees of disequilibrium (DoD) of the chemical reactions in the underlying DKM. In any given DKM the number of rate-limiting kinetic bottlenecks is generally much smaller than the number of species in the model. As a result, the DoDs of all the chemical reactions effectively assemble into a small number of groups that bear the information of the rate-controlling constraints. The DoDs of all reactions in each group exhibit almost identical behavior (time evolution, spatial dependence). Upon identification of these groups, the proposed kernel analysis of N matrices obtained from the stoichiometric coefficients yields the N constraints that effectively control the dynamics of the system. The method is demonstrated within the framework of modeling the expansion of the products of hydrogen oxy-combustion through a quasi-one-dimensional supersonic nozzle. The analysis predicts, and RCCE simulations confirm, that under the geometrical and boundary conditions considered the underlying DKM is accurately represented by only two bottleneck kinetic mechanisms, instead of the three constraints identified for the same problem in a recently published work also based, in part, on DoD analysis.
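
    The two ingredients named above, grouping reactions by similar DoD traces and taking the kernel of stoichiometric submatrices, can be sketched in Python as follows; the correlation-based grouping and the toy matrix are our illustrations of the idea, not the paper's algorithm.

        import numpy as np

        def group_reactions_by_dod(dod_traces, tol=0.99):
            """Group reactions whose degree-of-disequilibrium (DoD) time
            traces are nearly proportional, using pairwise correlation as a
            simple similarity. dod_traces: (n_reactions, n_times) array."""
            n = dod_traces.shape[0]
            corr = np.corrcoef(dod_traces)
            groups, assigned = [], set()
            for i in range(n):
                if i in assigned:
                    continue
                group = [j for j in range(n)
                         if j not in assigned and corr[i, j] >= tol]
                assigned.update(group)
                groups.append(group)
            return groups

        def constraints_from_kernel(stoich, group):
            """Kernel analysis: a basis of the null space of the
            stoichiometric vectors of one DoD group, i.e. species
            combinations left invariant by those reactions.
            stoich: (n_reactions, n_species) coefficient matrix."""
            sub = stoich[group, :]
            _, s, vt = np.linalg.svd(sub)
            rank = int(np.sum(s > 1e-10))
            return vt[rank:, :]  # rows spanning the kernel

        if __name__ == "__main__":
            # toy 3-reaction, 4-species example (made up for illustration)
            stoich = np.array([[-1, 1, 0, 0],
                               [0, -1, 1, 0],
                               [-1, 0, 1, 0]], dtype=float)
            t = np.linspace(0.0, 1.0, 50)
            dod = np.vstack([np.exp(-t), 2.0 * np.exp(-t), np.exp(-10.0 * t)])
            for g in group_reactions_by_dod(dod):
                print("group", g, "-> constraints:\n",
                      constraints_from_kernel(stoich, g))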

    Nonlinear Dynamical Equation for Irreversible, Steepest-Entropy-Ascent Relaxation to Stable Equilibrium

    We discuss the structure and main features of the nonlinear evolution equation proposed by this author as the fundamental dynamical law within the framework of Quantum Thermodynamics. The nonlinear equation generates a dynamical group providing a unique deterministic description of irreversible, conservative relaxation towards equilibrium from any non-equilibrium state, and satisfies a very restrictive stability requirement equivalent to the Hatsopoulos-Keenan statement of the second law of thermodynamics. Here, we emphasize its mathematical structure and its applicability also within other contexts, such as Classical and Quantum Statistical Mechanics and Information Theory. Comment: Proceedings of the Conference "Quantum Theory: Reconsideration of Foundations - 4", Vaxjo, Sweden, June 11-16, 2007
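
    The general shape of such a steepest-entropy-ascent law can be written schematically; the LaTeX below records only the structural features stated in the abstract (a Hamiltonian term plus a dissipative term that conserves probability and energy while producing entropy), not the paper's explicit construction of the dissipative operator.

        % Schematic steepest-entropy-ascent evolution of the density operator:
        \begin{equation}
          \frac{d\rho}{dt}
            = -\frac{i}{\hbar}\,[H,\rho] + \frac{1}{\tau}\,D(\rho),
        \end{equation}
        % where D(\rho) is built so that the relaxation is conservative
        % and irreversible:
        \begin{align}
          \frac{d}{dt}\,\mathrm{Tr}\,\rho &= 0, & &\text{(probability)}\\
          \frac{d}{dt}\,\mathrm{Tr}(\rho H) &= 0, & &\text{(energy)}\\
          \frac{d}{dt}\bigl[-k_{\mathrm B}\,\mathrm{Tr}(\rho\ln\rho)\bigr] &\ge 0, & &\text{(entropy production)}
        \end{align}
        % with the entropy production vanishing at equilibrium; the stability
        % requirement singles out the maximal-entropy equilibrium state,
        % for given conserved quantities, as the only stable one.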
    • 

    corecore